court case


NyayaAnumana & INLegalLlama: The Largest Indian Legal Judgment Prediction Dataset and Specialized Language Model for Enhanced Decision Analysis

Nigam, Shubham Kumar, Patnaik, Balaramamahanthi Deepak, Mishra, Shivam, Shallum, Noel, Ghosh, Kripabandhu, Bhattacharya, Arnab

arXiv.org Artificial Intelligence

The integration of artificial intelligence (AI) in legal judgment prediction (LJP) has the potential to transform the legal landscape, particularly in jurisdictions like India, where a significant backlog of cases burdens the legal system. This paper introduces NyayaAnumana, the largest and most diverse corpus of Indian legal cases compiled for LJP, encompassing a total of 702,945 preprocessed cases. NyayaAnumana, whose name combines the words "Nyaya" (judgment) and "Anumana" (prediction or inference) found across most major Indian languages, includes a wide range of cases from the Supreme Court, High Courts, Tribunal Courts, District Courts, and Daily Orders, and thus provides unparalleled diversity and coverage. Our dataset surpasses existing datasets like PredEx and ILDC, offering a comprehensive foundation for advanced AI research in the legal domain. In addition to the dataset, we present INLegalLlama, a domain-specific generative large language model (LLM) tailored to the intricacies of the Indian legal system. It is developed through a two-phase training approach over a base LLaMA model: first, the model undergoes continual pretraining on Indian legal documents; second, it receives task-specific supervised fine-tuning. This method allows the model to achieve a deeper understanding of legal contexts. Our experiments demonstrate that incorporating diverse court data significantly boosts model accuracy, achieving approximately 90% F1-score in prediction tasks. INLegalLlama not only improves prediction accuracy but also offers comprehensible explanations, addressing the need for explainability in AI-assisted legal decisions.
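To make the two-phase recipe concrete, here is a minimal sketch using the Hugging Face transformers library: a continual-pretraining pass with a plain causal-LM objective over raw legal text, followed by supervised fine-tuning on judgment-prediction examples. The checkpoint name, data files, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch of a two-phase pipeline like the one the abstract describes:
# continual pretraining on legal text, then supervised fine-tuning.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"   # assumed base checkpoint, not confirmed by the paper
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=2048)

collator = DataCollatorForLanguageModeling(tok, mlm=False)  # causal-LM objective

# Phase 1: continual pretraining on raw Indian legal documents (unlabeled text).
legal_corpus = load_dataset("json", data_files="legal_docs.jsonl")["train"]
legal_corpus = legal_corpus.map(tokenize, batched=True, remove_columns=["text"])
Trainer(model=model,
        args=TrainingArguments("cpt_out", num_train_epochs=1,
                               per_device_train_batch_size=1),
        train_dataset=legal_corpus, data_collator=collator).train()

# Phase 2: supervised fine-tuning on judgment-prediction pairs, serialized as
# instruction -> (label + explanation) text in the same "text" field.
sft_data = load_dataset("json", data_files="ljp_pairs.jsonl")["train"]
sft_data = sft_data.map(tokenize, batched=True, remove_columns=["text"])
Trainer(model=model,
        args=TrainingArguments("sft_out", num_train_epochs=3,
                               per_device_train_batch_size=1),
        train_dataset=sft_data, data_collator=collator).train()
```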


Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory

Li, Haoran, Fan, Wei, Chen, Yulin, Cheng, Jiayang, Chu, Tianshu, Zhou, Xuebing, Hu, Peizhao, Song, Yangqiu

arXiv.org Artificial Intelligence

Privacy research has attracted wide attention as individuals worry that their private data can be easily leaked during interactions with smart devices, social platforms, and AI applications. Computer science researchers, on the other hand, commonly study privacy through attacks and defenses within segmented subfields such as Computer Vision (CV), Natural Language Processing (NLP), and Computer Networks, each of which has its own formulation of privacy. Though pioneering works on attacks and defenses reveal sensitive privacy issues, they remain narrowly scoped and cannot fully cover people's actual privacy concerns. Consequently, general, human-centric privacy research remains largely unexplored. In this paper, we formulate the privacy issue as a reasoning problem rather than simple pattern matching. We ground our work in Contextual Integrity (CI) theory, which posits that people's perceptions of privacy are highly correlated with the corresponding social context. Based on this assumption, we develop the first comprehensive checklist that covers social identities, private attributes, and existing privacy regulations. Unlike prior works on CI that either cover limited expert-annotated norms or model incomplete social context, our proposed privacy checklist uses the entire Health Insurance Portability and Accountability Act of 1996 (HIPAA) as an example to show that large language models (LLMs) can be used to cover HIPAA's regulations completely. Additionally, our checklist gathers expert annotations across multiple ontologies to determine private information, including but not limited to personally identifiable information (PII). We use our preliminary results on HIPAA to shed light on future context-centric privacy research covering more privacy regulations, social norms, and standards.
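As a toy illustration of the CI framing, the sketch below encodes a norm as the classic five-parameter tuple (sender, recipient, subject, information type, transmission principle) and checks an information flow against permit and forbid norms. The two norms shown are illustrative paraphrases, not actual HIPAA clauses or the paper's checklist format.

```python
# Contextual Integrity as reasoning over parameterized information flows,
# rather than pattern matching on surface text.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    info_type: str
    principle: str

# Norms permit or forbid flows; "*" is a wildcard over a parameter.
# These norms are made up for illustration only.
ALLOW = [Flow("physician", "physician", "patient", "health_record", "treatment")]
DENY  = [Flow("physician", "employer",  "patient", "health_record", "*")]

def matches(norm: Flow, flow: Flow) -> bool:
    # Every norm parameter must be a wildcard or equal the flow's parameter.
    return all(n in ("*", f) for n, f in
               zip(vars(norm).values(), vars(flow).values()))

def check(flow: Flow) -> str:
    if any(matches(n, flow) for n in DENY):
        return "violation"
    if any(matches(n, flow) for n in ALLOW):
        return "permitted"
    return "unspecified: escalate to deeper reasoning (e.g., an LLM plus the checklist)"

print(check(Flow("physician", "employer", "patient", "health_record", "hiring")))
# -> "violation"
```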


Artificial Intelligence (AI) in Legal Data Mining

Deroy, Aniket, Bailung, Naksatra Kumar, Ghosh, Kripabandhu, Ghosh, Saptarshi, Chakraborty, Abhijnan

arXiv.org Artificial Intelligence

Despite the availability of vast amounts of data, legal data is often unstructured, making it difficult even for law practitioners to ingest and comprehend it. It is important to organise legal information in a way that is useful for practitioners and downstream automation tasks. The word ontology was used by Greek philosophers to discuss concepts of existence, being, becoming, and reality. Today, scientists use the term to describe the relations between concepts, data, and entities. A good example of a working ontology was developed by Dhani and Bhatt; it deals with Indian court cases on intellectual property rights (IPR). The future of legal ontologies is likely to be shaped by computer experts and legal experts alike.
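For a concrete sense of what such an ontology encodes, here is a minimal sketch using rdflib; the namespace, classes, and relations are illustrative guesses in the spirit of Dhani and Bhatt's IPR ontology, not its actual schema.

```python
# Typed relations between concepts, entities, and case data, as RDF triples.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

LEX = Namespace("http://example.org/legal#")   # hypothetical namespace
g = Graph()

# Concept hierarchy: an IPR case is a kind of court case.
g.add((LEX.IPRCase, RDFS.subClassOf, LEX.CourtCase))

# An individual case, linked to the statute it invokes and the court that heard it.
g.add((LEX.case_42, RDF.type, LEX.IPRCase))
g.add((LEX.case_42, LEX.citesStatute, LEX.CopyrightAct1957))
g.add((LEX.case_42, LEX.heardBy, LEX.DelhiHighCourt))
g.add((LEX.case_42, RDFS.label, Literal("Example trademark dispute")))

print(g.serialize(format="turtle"))
```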


Lawyer in hot water after using AI to present made-up information: 'incompetent'

FOX News

A New York lawyer could face discipline after it was discovered that a case she cited was generated by artificial intelligence and did not actually exist. The 2nd U.S. Circuit Court of Appeals referred lawyer Jae Lee to its grievance panel last week after discovering she had used OpenAI's ChatGPT to research prior cases for a medical malpractice lawsuit but failed to confirm whether the case she was citing actually existed, according to a report from Reuters. The attorney included the fictitious state court decision in an appeal for her client's lawsuit claiming that a Queens doctor botched an abortion, according to the report, leading the court to order that Lee submit a copy of the decision, which she was "unable to furnish." The lawyer's conduct "falls well below the basic obligations of counsel," the 2nd U.S. Circuit Court of Appeals concluded in its disciplinary review, which was sent to Lee. Lee later admitted to using a case that was "suggested" to her by ChatGPT, a popular AI chatbot, and failing to verify the results herself. Her decision to use the popular application comes even though experts have warned against such practices, noting that AI is a relatively new technology well-known for "hallucinating" false or misleading results.


Lawyers brace for AI's potential to upend court cases with phony evidence

FOX News

"Gutfeld!" panelists weigh in on the rise of video and audio clips made using artificial intelligence tools to mimic the voice and the likeness of anyone you want. Images generated by artificial intelligence are becoming more convincing and prevalent, and they could lead to more complicated court cases if the synthetic media is submitted as evidence, legal experts say. "Deepfakes" often involve editing videos or photos of people to make them look like someone else by using deep-learning AI. The technology broadly hit the public's radar in 2017 after a Reddit user posted realistic-looking pornography of celebrities to the platform. The pornography was revealed to be doctored, but the revolutionary tech has only become more realistic and easier to make in the years since.


AI lawyer stunt called off after CEO threatened with jail • The Register

#artificialintelligence

Joshua Browder, CEO of DoNotPay, made headlines for claiming an AI chatbot was due to defend a man in an upcoming court hearing, but has since pulled out of the stunt. Browder runs a consumer rights startup that was originally built to help people appeal parking tickets more easily and has since grown with the aim of building "the world's first robot lawyer." He wanted to show that AI could replace expensive human lawyers, using language models to form legal arguments. Earlier this month he claimed to have convinced a man to wear headphones during a court case and recite the output of an AI chatbot in a hearing scheduled to take place over Zoom. But his behavior caught the attention of prosecutors irked by his reckless antics.


Robot Lawyer Stunt Cancelled After Human Lawyers Objected

#artificialintelligence

DoNotPay has cancelled plans to have its AI-powered "robot lawyer" represent a defendant in a U.S. court after several human lawyer organizations objected to the experiment, according to company founder and CEO Joshua Browder. Browder hoped to make history by becoming the first person to use artificial intelligence (AI) to argue a case in a court of law. As MetaNews previously reported, the plan was to use the company's AI chatbot in a traffic case scheduled for Feb. 22. "After receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom," he tweeted on Jan. 25. "DoNotPay is postponing our court case and sticking to consumer rights."


AI-powered "robot" lawyer won't argue in court after jail threats - CBS News

#artificialintelligence

A "robot" lawyer powered by artificial intelligence was set to be the first of its kind to help a defendant fight a traffic ticket in court next month. But the experiment has been scrapped after "State Bar prosecutors" threatened the man behind the company that created the chatbot with prison time. Joshua Browder, CEO of DoNotPay, on Wednesday tweeted that his company "is postponing our court case and sticking to consumer rights." Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom. Browder also said he will not be sending the company's robot lawyer to court.


DoNotPay Offers Lawyers $1M to Let Its AI Argue Before Supreme Court

#artificialintelligence

A legal services company says it's willing to pay $1 million to fuck around and find out. On Sunday, DoNotPay CEO Joshua Browder made a wild proposition to any lawyer slated to argue an upcoming case in front of the U.S. Supreme Court: let DoNotPay's AI lawyer, which is built on OpenAI's viral GPT-3 API, argue the case before the court in exchange for $1 million. All the human lawyer would need to do is wear AirPods and repeat to the court what DoNotPay's robot lawyer argues. "DoNotPay will pay any lawyer or person $1,000,000 with an upcoming case in front of the United States Supreme Court to wear AirPods and let our robot lawyer argue the case by repeating exactly what it says," Browder wrote on Twitter on Sunday night.


DoNotPay's 'Robot Lawyer' Is Gearing Up for Its First U.S. Court Case

#artificialintelligence

An AI-based legal advisor is set to play the role of a lawyer in an actual court case for the first time. Via an earpiece, the artificial intelligence will coach a courtroom defendant on what to say to get out of the fines and consequences associated with a speeding charge, AI company DoNotPay has claimed, according to a report initially from New Scientist and confirmed by Gizmodo. The in-person speeding ticket hearing is scheduled to take place in a U.S. courtroom (specifically, not California) sometime in February, DoNotPay's founder and CEO Joshua Browder told Gizmodo in a phone call. However, Browder and the company would not provide any further case details to protect the defendant's privacy. DoNotPay is also reluctant to disclose case specifics because what it is doing is likely in violation of courtroom laws and protocol.